7 research outputs found

    On the Robustness of Dataset Inference

    Full text link
    Machine learning (ML) models are costly to train as they can require a significant amount of data, computational resources and technical expertise. Thus, they constitute valuable intellectual property that needs protection from adversaries wanting to steal them. Ownership verification techniques allow the victims of model stealing attacks to demonstrate that a suspect model was in fact stolen from theirs. Although a number of ownership verification techniques based on watermarking or fingerprinting have been proposed, most of them fall short either in terms of security guarantees (well-equipped adversaries can evade verification) or computational cost. A fingerprinting technique, Dataset Inference (DI), has been shown to offer better robustness and efficiency than prior methods. The authors of DI provided a correctness proof for linear (suspect) models. However, in a subspace of the same setting, we prove that DI suffers from high false positives (FPs) -- it can incorrectly identify an independent model trained with non-overlapping data from the same distribution as stolen. We further prove that DI also triggers FPs in realistic, non-linear suspect models. We then confirm empirically that DI in the black-box setting leads to FPs, with high confidence. Second, we show that DI also suffers from false negatives (FNs) -- an adversary can fool DI (at the cost of incurring some accuracy loss) by regularising a stolen model's decision boundaries using adversarial training, thereby leading to an FN. To this end, we demonstrate that black-box DI fails to identify a model adversarially trained from a stolen dataset -- the setting where DI is the hardest to evade. Finally, we discuss the implications of our findings, the viability of fingerprinting-based ownership verification in general, and suggest directions for future work. Comment: 19 pages; Accepted to Transactions on Machine Learning Research 06/202
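
    At its core, black-box DI amounts to a hypothesis test over how confident the suspect model is on the victim's private training data versus public data from the same distribution. The sketch below is a minimal illustration of that idea (not the authors' exact procedure); suspect_model, victim_train_x and public_x are hypothetical placeholders. It also hints at the false-positive failure mode discussed above: an independent same-distribution model can likewise be systematically more confident on data that resembles its own training set.

```python
import numpy as np
from scipy import stats

def prediction_margin(model, x):
    """Black-box proxy for distance to the decision boundary:
    top-1 probability minus runner-up probability."""
    probs = model.predict_proba(x)                     # hypothetical black-box API
    top2 = np.sort(probs, axis=1)[:, -2:]              # [runner-up, top] per sample
    return top2[:, 1] - top2[:, 0]

def dataset_inference_verdict(suspect_model, victim_train_x, public_x, alpha=0.01):
    """Flag the suspect as 'stolen' if it is significantly more confident on the
    victim's private training data than on public data from the same distribution."""
    margins_private = prediction_margin(suspect_model, victim_train_x)
    margins_public = prediction_margin(suspect_model, public_x)
    _, p_value = stats.ttest_ind(margins_private, margins_public,
                                 equal_var=False, alternative='greater')
    # An independent model trained on same-distribution data can also pass this
    # test, which is the false-positive failure mode described in the abstract.
    return p_value < alpha
```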

    False Claims against Model Ownership Resolution

    Full text link
    Deep neural network (DNN) models are valuable intellectual property of model owners, constituting a competitive advantage. Therefore, it is crucial to develop techniques to protect against model theft. Model ownership resolution (MOR) is a class of techniques that can deter model theft. A MOR scheme enables an accuser to assert an ownership claim for a suspect model by presenting evidence, such as a watermark or fingerprint, to show that the suspect model was stolen or derived from a source model owned by the accuser. Most of the existing MOR schemes prioritize robustness against malicious suspects, ensuring that the accuser will win if the suspect model is indeed a stolen model. In this paper, we show that common MOR schemes in the literature are vulnerable to a different, equally important but insufficiently explored, robustness concern: a malicious accuser. We show how malicious accusers can successfully make false claims against independent suspect models that were not stolen. Our core idea is that a malicious accuser can deviate (without detection) from the specified MOR process by finding (transferable) adversarial examples that successfully serve as evidence against independent suspect models. To this end, we first generalize the procedures of common MOR schemes and show that, under this generalization, defending against false claims is as challenging as preventing (transferable) adversarial examples. Via systematic empirical evaluation we demonstrate that our false claim attacks always succeed in all prominent MOR schemes with realistic configurations, including against a real-world model: Amazon's Rekognition API. Comment: 13 pages, 3 figures
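
    The attack hinges on transferable adversarial examples serving as false ownership evidence. Below is a rough sketch of that single ingredient: a basic targeted PGD loop run against a model the malicious accuser controls, whose outputs are then checked against an independent suspect model. source_model, suspect_model and the inputs are placeholders; the paper's actual attacks are more involved than this.

```python
import torch
import torch.nn.functional as F

def craft_false_evidence(source_model, x, y_target, eps=8/255, step=2/255, iters=40):
    """Targeted L-infinity PGD against the accuser's *own* source model.
    If the perturbations transfer, an independent suspect model will also predict
    y_target on these inputs, which the malicious accuser presents as 'evidence'."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(source_model(x_adv), y_target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: step against the gradient to move towards the target class.
        x_adv = x_adv.detach() - step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

# A false claim "succeeds" against an independent model if, for example:
# hit_rate = (suspect_model(x_adv).argmax(dim=1) == y_target).float().mean()
```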

    Adversary Detection in Online Machine Learning Systems

    No full text
    Machine learning applications have become increasingly popular. At the same time, model training has become an expensive task in terms of computational power, amount of data, and human expertise. As a result, models now constitute intellectual property and business advantage to model owners and thus, their confidentiality must be preserved. Recently, it was shown that models can be stolen via model extraction attacks that do not require physical white-box access to the model but merely a black-box prediction API. A stolen model can be used to avoid paying for the service or even to undercut the offering of the legitimate model owner. Hence, it deprives the victim of the accumulated business advantage. In this thesis, we introduce two novel defense methods designed to detect distinct classes of model extraction attacks.
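
    As a rough illustration of the kind of detector such a thesis describes, the sketch below follows a PRADA-style observation: benign clients' query-to-query distances tend to look roughly Gaussian, whereas many extraction attacks issue synthetic queries that break that pattern. The threshold, window size and lack of per-class bookkeeping here are simplifying assumptions, not the published configuration.

```python
import numpy as np
from scipy import stats

class ExtractionDetector:
    """Simplified sketch: flag a client whose inter-query distance distribution
    deviates from normality (a signature of many synthetic-query attacks)."""

    def __init__(self, threshold=0.9, min_queries=100):
        self.threshold = threshold      # Shapiro-Wilk W below this -> alarm
        self.min_queries = min_queries
        self.history = []               # past queries (flattened feature vectors)
        self.distances = []             # min distance of each new query to history

    def observe(self, query):
        q = np.asarray(query, dtype=float).ravel()
        if self.history:
            dists = np.linalg.norm(np.stack(self.history) - q, axis=1)
            self.distances.append(dists.min())
        self.history.append(q)
        if len(self.distances) < self.min_queries:
            return False                # not enough evidence yet
        w_stat, _ = stats.shapiro(self.distances[-self.min_queries:])
        return w_stat < self.threshold  # True -> likely extraction attack
```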

    Ownership and Confidentiality in Machine Learning

    No full text
    Statistical and machine learning (ML) models have been the primary tools for data-driven analysis for decades. Recent theoretical progress in deep neural networks (DNNs) coupled with computational advances put DNNs at the forefront of ML in the domains of vision, audio and language understanding. Alas, this has made DNNs targets for a wide array of attacks. Their complexity revealed a wider range of vulnerabilities compared to the much simpler models of the past. As of now, attacks have been proposed against every single step of the ML pipeline: gathering and preparation of data, model training, model serving and inference. In order to effectively build and deploy ML models, model builders invest vast resources into gathering, sanitising and labelling the data, designing and training the models, as well as serving them effectively to their customers. ML models embody valuable intellectual property (IP), and thus business advantage that needs to be protected. Model extraction attacks aim to mimic the functionality of ML models, or even compromise their confidentiality. An adversary who extracts the model can leverage it for other attacks, continuously use the model without paying, or even undercut the original owner by providing a competing service at a lower cost. All research questions investigated in this dissertation share the common theme of the theft of ML models or their functionality. The dissertation is divided into four parts. In the first part, I explore the feasibility of model extraction attacks. In the publications discussed in this part, my coauthors and I design novel black-box extraction attacks against classification and image-translation deep neural networks. Our attacks result in surrogate models that rival the victim models at their tasks. In the second part, we investigate ways of addressing the threat of model extraction; I propose two detection mechanisms able to identify ongoing extraction attacks in certain settings with the following caveat: detection and prevention cannot stop a well-equipped adversary from extracting the model. Hence, in the third part, I focus on reliable ownership verification. By identifying extracted models and tracing them back to the victim, ownership verification can deter model extraction. In the publications discussed in this part, I demonstrate it by introducing the first watermarking scheme designed specifically against extraction attacks. Crucially, I critically evaluate the reliability of my approach w.r.t. the capabilities of an adaptive adversary. Further, I empirically evaluate a promising model fingerprinting scheme, and show that well-equipped adaptive adversaries remain a threat to model confidentiality. In the fourth part, I identify the problem of conflicting interactions among protection mechanisms. ML models are vulnerable to various attacks, and thus, may need to be deployed with multiple protection mechanisms at once. I show that combining ownership verification with protection mechanisms against other security/privacy concerns can result in conflicts. The dissertation concludes with my observations about model confidentiality, the feasibility of ownership verification, and potential directions for future work.
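
    For readers unfamiliar with the threat, a black-box extraction attack can be sketched as follows: query the victim's prediction API with attacker-chosen inputs and fit a surrogate to the returned outputs. The code below is a minimal, generic illustration under that assumption (victim_api, surrogate and transfer_loader are placeholders), not any of the specific attacks from the publications discussed in the dissertation.

```python
import torch
import torch.nn.functional as F

def extract_surrogate(victim_api, surrogate, transfer_loader, epochs=10, lr=1e-3):
    """Train a surrogate to match the victim's output probabilities."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in transfer_loader:                  # unlabeled attacker-owned inputs
            with torch.no_grad():
                soft_labels = victim_api(x)        # victim's probability vectors
            opt.zero_grad()
            loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),
                            soft_labels, reduction='batchmean')
            loss.backward()
            opt.step()
    return surrogate                               # surrogate mimics the victim
```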

    Detecting organized eCommerce fraud using scalable categorical clustering

    No full text
    Online retail, eCommerce, frequently falls victim to fraud conducted by malicious customers (fraudsters) who obtain goods or services through deception. Fraud coordinated by groups of professional fraudsters that place several fraudulent orders to maximize their gain is referred to as organized fraud. Existing approaches to fraud detection typically analyze orders in isolation and are not effective at identifying groups of fraudulent orders linked to organized fraud. They also wrongly identify many legitimate orders as fraud, which hinders their usage for automated fraud cancellation. We introduce a novel solution to detect organized fraud by analyzing orders in bulk. Our approach is based on clustering and aims to group together fraudulent orders placed by the same group of fraudsters. It selectively uses two existing techniques, agglomerative clustering and sampling, to recursively group orders into small clusters in a reasonable amount of time. We assess our clustering technique on real-world orders placed on the Zalando website, the largest online apparel retailer in Europe. Our clustering processes 100,000s of orders in a few hours and groups 35-45% of fraudulent orders together. We propose a simple technique built on top of our clustering that detects 26.2% of fraud while raising false alarms for only 0.1% of legitimate orders. Peer reviewed
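
    A minimal sketch of the general recipe, assuming orders are encoded as integer categorical feature vectors: sample a tractable subset, compute Hamming distances over the categorical attributes, and cut an agglomerative dendrogram at a distance threshold so that tightly linked orders form small clusters. This illustrates the idea only; it is not the paper's scalable recursive algorithm, and the sampling step here is a plain random subsample.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AgglomerativeClustering

def cluster_orders(orders, max_sample=5000, distance_threshold=0.3, seed=0):
    """orders: 2-D integer array of categorical attributes per order
    (e.g. delivery country, payment method, email domain)."""
    rng = np.random.default_rng(seed)
    if len(orders) > max_sample:                       # sampling keeps clustering tractable
        orders = orders[rng.choice(len(orders), size=max_sample, replace=False)]
    # Hamming distance = fraction of categorical attributes that differ.
    dist = squareform(pdist(orders, metric='hamming'))
    model = AgglomerativeClustering(n_clusters=None,
                                    distance_threshold=distance_threshold,
                                    metric='precomputed',   # 'affinity=' in older scikit-learn
                                    linkage='average')
    return model.fit_predict(dist)                     # cluster label per sampled order
```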

    Conflicting Interactions among Protection Mechanisms for Machine Learning Models

    No full text
    Nowadays, systems based on machine learning (ML) are widely used in different domains. Given their popularity, ML models have become targets for various attacks. As a result, research at the intersection of security/privacy and ML has flourished. Typically such work has focused on individual types of security/privacy concerns and mitigations thereof. However, in real-life deployments, an ML model will need to be protected against several concerns simultaneously. A protection mechanism optimal for a specific security or privacy concern may interact negatively with mechanisms intended to address other concerns. Despite its practical relevance, the potential for such conflicts has not been studied adequately. In this work, we first provide a framework for analyzing such conflicting interactions. We then focus on systematically analyzing pairwise interactions between protection mechanisms for one concern, model and data ownership verification, with two other classes of ML protection mechanisms: differentially private training, and robustness against model evasion. We find that several pairwise interactions result in conflicts. We also explore potential approaches for avoiding such conflicts. First, we study the effect of hyperparameter relaxations, finding that there is no sweet spot balancing the performance of both protection mechanisms. Second, we explore whether modifying one type of protection mechanism (ownership verification) so as to decouple it from factors that may be impacted by a conflicting mechanism (differentially private training or robustness to model evasion) can avoid conflict. We show that this approach can indeed avoid the conflict between ownership verification mechanisms when combined with differentially private training, but has no effect on robustness to model evasion. We conclude by identifying the gaps in the landscape of studying interactions between other types of ML protection mechanisms.
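
    One way to make such a conflict concrete: backdoor-based ownership verification needs the model to memorise a trigger set, while differentially private training clips and noises per-example gradients, which suppresses exactly that kind of memorisation. The toy sketch below (a simplified manual DP-SGD step plus a trigger-set accuracy check, not the paper's experimental setup) shows where the tension arises; the loss function, model and trigger set are placeholders.

```python
import torch

def watermark_accuracy(model, trigger_x, trigger_y):
    """Fraction of trigger-set inputs still classified with their watermark labels."""
    with torch.no_grad():
        return (model(trigger_x).argmax(dim=1) == trigger_y).float().mean().item()

def dp_sgd_step(model, loss_fn, x_batch, y_batch, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0):
    """One simplified DP-SGD update: per-example gradient clipping + Gaussian noise.
    The clipping and noising that provide privacy also limit the memorisation that
    backdoor-based watermarks depend on, which is the source of the conflict."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(x_batch, y_batch):                 # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        total_norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, p in zip(summed, model.parameters()):
            s += p.grad * scale                        # accumulate clipped gradient
    with torch.no_grad():
        for s, p in zip(summed, model.parameters()):
            s += noise_multiplier * clip_norm * torch.randn_like(s)
            p -= lr * s / len(x_batch)                 # noisy averaged update
```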